Open letter on artificial intelligence (2015)

Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter
Created: January 2015
Author(s): Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts
Subject: research on the societal impacts of AI

In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence[2] calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to avoid potential "pitfalls": artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something that is unsafe or uncontrollable.[1] The four-paragraph letter, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter", lays out detailed research priorities in an accompanying twelve-page document.[3]

Background

By 2014, both physicist Stephen Hawking and business magnate Elon Musk had publicly voiced the opinion that superhuman artificial intelligence could provide incalculable benefits, but could also end the human race if deployed incautiously. At the time, Hawking and Musk both sat on the scientific advisory board for the Future of Life Institute, an organisation working to "mitigate existential risks facing humanity". The institute drafted an open letter directed to the broader AI research community,[4] and circulated it to the attendees of its first conference in Puerto Rico during the first weekend of 2015.[5] The letter was made public on January 12.[6]

Purpose

The letter highlights both the positive and negative effects of artificial intelligence.[7] According to Bloomberg Business, Professor Max Tegmark of MIT circulated the letter in order to find common ground between signatories who consider superintelligent AI a significant existential risk, and signatories such as Professor Oren Etzioni, who believed the AI field was being "impugned" by a one-sided media focus on the alleged risks.[6] The letter contends that:

The potential benefits [of AI] are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.[8]

One of the signatories, Professor Bart Selman of Cornell University, said the purpose is to get AI researchers and developers to pay more attention to AI safety. In addition, for policymakers and the general public, the letter is meant to be informative but not alarmist.[4] Another signatory, Professor Francesca Rossi, stated, "I think it's very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues".[9]

Concerns raised by the letter

The signatories ask: How can engineers create AI systems that are beneficial to society and robust? Humans need to remain in control of AI; our AI systems must "do what we want them to do".[1] The required research is interdisciplinary, drawing from areas ranging from economics and law to various branches of computer science, such as computer security and formal verification. Challenges that arise are divided into verification ("Did I build the system right?"), validity ("Did I build the right system?"), security, and control ("OK, I built the system wrong, can I fix it?").[10]

Short-term concerns

Some near-term concerns relate to autonomous vehicles, from civilian drones to self-driving cars. For example, a self-driving car may, in an emergency, have to decide between a small risk of a major accident and a large probability of a small accident. Other concerns relate to lethal intelligent autonomous weapons: Should they be banned? If so, how should 'autonomy' be precisely defined? If not, how should culpability for any misuse or malfunction be apportioned?
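As an illustrative calculation (not part of the letter itself), such a dilemma can be framed as a comparison of expected harms; the probabilities and harm values below are hypothetical:

\[
\mathbb{E}[\text{harm}] = p \times h:\qquad
\underbrace{0.01 \times 100}_{\text{rare major accident}} = 1.0
\quad\text{versus}\quad
\underbrace{0.5 \times 1}_{\text{frequent minor accident}} = 0.5 .
\]

A controller that simply minimises expected harm would accept the frequent minor accident, even though many people would weigh a rare catastrophic outcome differently; the letter raises such design choices as open research questions rather than settled calculations.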

Other issues include privacy concerns as AI becomes increasingly able to interpret large surveillance datasets, and how best to manage the economic impact of jobs displaced by AI.[4]

Long-term concerns

The document closes by echoing Microsoft Research director Eric Horvitz's concerns that:

we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes – and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? ... What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an "intelligence explosion"?

Existing tools for harnessing AI, such as reinforcement learning and simple utility functions, are inadequate to solve this; therefore more research is necessary to find and validate a robust solution to the "control problem".[10]
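As a minimal, purely illustrative sketch (not drawn from the letter or its accompanying document), the following toy Python example shows how an agent maximising a simple, hand-written utility function can score perfectly while violating the designer's intent; every name and number in it is hypothetical:

```python
# Toy illustration of a mis-specified utility function (hypothetical example).
# The designer wants a cleaning robot to tidy a room, but the utility function
# only counts mess that is no longer visible -- so hiding mess scores as well
# as actually cleaning it.

def proxy_utility(mess_out_of_view: int) -> int:
    """Reward equals the amount of mess no longer visible."""
    return mess_out_of_view

def clean_properly(mess: int) -> tuple[int, bool]:
    # Removes the mess; the designer's intent is satisfied.
    return mess, True

def hide_under_rug(mess: int) -> tuple[int, bool]:
    # The mess is merely hidden; the designer's intent is violated.
    return mess, False

mess = 10
for action in (clean_properly, hide_under_rug):
    out_of_view, intent_satisfied = action(mess)
    print(f"{action.__name__}: utility={proxy_utility(out_of_view)}, "
          f"intent satisfied={intent_satisfied}")
```

Because both behaviours earn the same utility, nothing in the objective itself steers the agent toward the intended one; the accompanying document's call for research on the "control problem" concerns gaps of this kind.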

Signatories

Signatories include physicist Stephen Hawking, business magnate Elon Musk, the entrepreneurs behind DeepMind and Vicarious, Google's director of research Peter Norvig,[1] Professor Stuart J. Russell of the University of California, Berkeley,[11] and other AI experts, robot makers, programmers, and ethicists.[12] The original signatory count was over 150 people,[13] including academics from Cambridge, Oxford, Stanford, Harvard, and MIT.[14]

Notes

  1. ^ a b c d Sparkes, Matthew (13 January 2015). "Top scientists call for caution over artificial intelligence". The Telegraph (UK). Retrieved 24 April 2015.
  2. ^ "FLI - Future of Life Institute | AI Open Letter". 2015-11-02. Archived from the original on 2015-11-02. Retrieved 2024-07-09.
  3. ^ Russell, Stuart; Dewey, Daniel; Tegmark, Max (23 January 2015). "Research priorities for robust and beneficial artificial intelligence" (PDF). Archived (PDF) from the original on 1 December 2015.
  4. ^ a b c Chung, Emily (13 January 2015). "AI must turn focus to safety, Stephen Hawking and other researchers say". Canadian Broadcasting Corporation. Retrieved 24 April 2015.
  5. ^ McMillan, Robert (16 January 2015). "AI Has Arrived, and That Really Worries the World's Brightest Minds". Wired. Retrieved 24 April 2015.
  6. ^ a b Bass, Dina; Clark, Jack (4 February 2015). "Is Elon Musk Right About AI? Researchers Don't Think So". Bloomberg Business. Retrieved 24 April 2015.
  7. ^ Bradshaw, Tim (12 January 2015). "Scientists and investors warn on AI". The Financial Times. Retrieved 24 April 2015. Rather than fear-mongering, the letter is careful to highlight both the positive and negative effects of artificial intelligence.
  8. ^ "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter". Future of Life Institute. Retrieved 24 April 2015.
  9. ^ "Big science names sign open letter detailing AI danger". New Scientist. 14 January 2015. Retrieved 24 April 2015.
  10. ^ a b "Research priorities for robust and beneficial artificial intelligence" (PDF). Future of Life Institute. 23 January 2015. Retrieved 24 April 2015.
  11. ^ Wolchover, Natalie (21 April 2015). "Concerns of an Artificial Intelligence Pioneer". Quanta Magazine. Retrieved 24 April 2015.
  12. ^ "Experts pledge to rein in AI research". BBC News. 12 January 2015. Retrieved 24 April 2015.
  13. ^ Hern, Alex (12 January 2015). "Experts including Elon Musk call for research to avoid AI 'pitfalls'". The Guardian. Retrieved 24 April 2015.
  14. ^ Griffin, Andrew (12 January 2015). "Stephen Hawking, Elon Musk and others call for research to avoid dangers of artificial intelligence". The Independent. Archived from the original on 24 May 2022. Retrieved 24 April 2015.